
    Which Are the Main Surface Disinfection Approaches at the Time of SARS-CoV-2?

    Among the many guidelines issued by the World Health Organization to prevent contagion from the novel coronavirus (SARS-CoV-2), disinfection of animate and inanimate surfaces has emerged as a key issue. One effective approach to preventing its propagation is disinfecting air, skin, or surfaces. Thorough and rational application of an Environmental Protection Agency (EPA)-approved disinfectant to surfaces, together with good personal hygiene, including cleaning hands with appropriate products (e.g., 60–90% alcohol-based products), should minimize transmission of viral respiratory pathogens such as SARS-CoV-2. Critical issues, associated with the potential health hazards of chemical disinfectants and the short-lived effect of most treatments, have fostered the introduction of innovative and alternative disinfection approaches. The present review aims to provide an outline of the methods currently used for inanimate surface disinfection, with a look to the future and a focus on the development of innovative and effective disinfection approaches (e.g., metal nanoparticles, photocatalysis, self-cleaning, and self-disinfection), with particular attention to SARS-CoV-2. Existing reviews usually focus on a specific category of disinfection methods and are therefore limited in scope. A panoramic review with a wider focus, such as the one proposed here, can be of added value for operators in the sector and for the scientific community in general.

    Study of the Synthetic Approach Influence in Ni/CeO2-Based Catalysts for Methane Dry Reforming

    This study focuses on the influence of the synthetic approach on the morphostructural features and catalytic performance of Ni/CeO2 catalysts. Incipient wetness impregnation, coprecipitation, and nitrate combustion were studied as catalyst preparation approaches, and the materials were then tested at 700 °C for methane dry reforming (MDR). The morphostructural properties of the materials were studied in depth using several techniques: temperature-programmed reduction (TPR) to investigate reducibility and support-metal interaction, N2 physisorption to evaluate porosity and surface area, scanning electron microscopy (SEM) and X-ray diffraction (XRD) to estimate Ni dispersion, and temperature-programmed oxidation (TPO) to identify the type and amount of coke formed on the catalysts' surface after reaction. From the data obtained, coprecipitation turned out to be the most suitable technique for this application: this catalyst was able to reach 70% CO2 conversion and 30% methane conversion, with an H2 yield of 15% and a CO yield of 30% at the end of the 30 h test. Moreover, it was also the catalyst with the highest metal dispersion, the strongest interaction with the support, and the lowest coke deposition.

    Improved Microarray-Based Decision Support with Graph Encoded Interactome Data

    In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome covering different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources, on metabolic pathway information (KEGG), protein-protein interactions (OPHID), and miRNA-gene targeting (microRNA.org), outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.
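The best-performing combination strategy above simply averages the predictions of several kernel-based classifiers, one per data source. Below is a minimal sketch of that idea on synthetic data, using a plain kernel ridge classifier in place of the paper's spectral-graph SVM; the data, feature split, kernel parameter, and regularization constant are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data; the two feature subsets stand in for two
# "data sources" (e.g. expression features weighted by two networks).
n = 40
X = np.vstack([rng.normal(-1, 1, (n // 2, 6)), rng.normal(1, 1, (n // 2, 6))])
y = np.array([-1.0] * (n // 2) + [1.0] * (n // 2))

def rbf(A, B, gamma=0.5):
    # Gaussian RBF kernel matrix between rows of A and rows of B.
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def fit_krr(K, y, lam=1e-2):
    # Kernel ridge "classifier": alpha = (K + lam * I)^-1 y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

sources = [X[:, :3], X[:, 3:]]            # two views of the same samples
alphas = [fit_krr(rbf(S, S), y) for S in sources]

# Score a new point near the +1 class center with each per-source model,
# then average the decision values (the combination rule from the study).
x_new = np.array([[1.2, 0.8, 1.0, 0.9, 1.1, 1.3]])
views = [x_new[:, :3], x_new[:, 3:]]
scores = [rbf(v, S) @ a for v, S, a in zip(views, sources, alphas)]
decision = np.mean(scores)                # positive score => class +1
```

Averaging decision values rather than hard labels keeps the combined rule smooth and lets a confident source outvote an uncertain one.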

    Kernels and Tensors for Structured Data Modelling (Kernels en tensoren voor het modelleren van gestructureerde data)

    A key ingredient to improve the generalization of machine learning algorithms is to convey prior information, either by choosing appropriate input representations or by tailored regularization schemes. This becomes of paramount importance in all applications where the number of available observations for training is limited. In many such cases, data are structured and can be conveniently represented as higher-order arrays (tensors). The scope of this thesis is the development of learning algorithms that exploit the structural information of these arrays to improve generalization. This is achieved by combining tensor-based methods with kernels, convex optimization, sparsity, and statistical learning principles.
    As a first contribution we present a parametric framework based on convex optimization and spectral regularization. We give a mathematical characterization of spectral penalties for tensors and analyze a unifying class of convex optimization problems, for which we present a new, provably convergent and scalable template algorithm. We then specialize this class of problems to perform learning in both a transductive and an inductive setting. In the transductive case one has an input data tensor with missing features and, possibly, a partially observed matrix of labels. The goal is both to infer the missing input features and to predict the missing labels. For induction, the goal is to determine a parametric model for each learning task to be used for out-of-sample prediction. Each training pair consists of a multidimensional array and a set of labels, each of which corresponds to related but distinct tasks. As a by-product of using a tensor-based formalism, our approach enables one to tackle multiple tasks simultaneously in a natural way. Empirical studies demonstrate the merits of the proposed methods.
    Parametric tensor-based techniques present a number of advantages; in particular, they often lead to interpretable models, which is a desirable feature in a number of applications of interest. However, they constitute a somewhat restricted class that might suffer from limited predictive power. A second contribution of this thesis is to go beyond this limitation by introducing nonparametric tensor-based models. To this end we discuss two different ideas. The first approach is based on an explicit multi-way feature representation, found as the minimum-norm solution of an operatorial equation, which carries structural information from the input data representation. A main drawback is that estimation within this feature space results in non-convex and non-scalable problems. The second approach fits into the same primal-dual framework underlying SVM-like algorithms and allows the efficient estimation of nonparametric tensor-based models. Although specialized kernels exist for certain classes of structured data, no existing approach exploits the structure of tensorial representations. We go beyond this limitation by proposing a class of tensorial kernels that links to the multilinear singular value decomposition (MLSVD), and we study the properties of the proposed similarity measure.
    The tensorial kernel is a special case of a more general class of product kernels. Product kernels, including the widely used Gaussian RBF kernel, play a special role in nonparametric statistics and machine learning. At a more fundamental level, we elaborate on the link between tensors and kernels. We show that, on the one hand, spaces of finite-dimensional tensors can be regarded as RKHSs associated with product kernels. On the other hand, the Hilbert space of multilinear functionals associated with general product kernels can be regarded as a space of infinite-dimensional tensors.
    Many objects of interest, such as videos and colored images, admit a natural tensorial representation. Additionally, tensor representations naturally result from the experiments performed in a number of fields. On top of this, there are cases where one can explicitly carry out tensor transformations with the purpose of exploiting the spectral content of these new representations. We show that one such transformation can be used for learning when input data are multivariate time series. We represent these objects by cumulant tensors and train classifiers based on tensorial kernels. Contrary to existing approaches, the resulting procedure does not require an (often nontrivial) blind identification step. Nonetheless, insightful connections with the dynamics of the generating systems can be drawn under specific modeling assumptions. The approach is illustrated on a brain decoding task where the direction, either left or right, towards which the subject modulates attention is predicted from magnetoencephalography (MEG) signals.
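The product-kernel structure mentioned above can be made concrete with the Gaussian RBF kernel, which factorizes exactly into a product of one-dimensional RBF kernels, one per coordinate (the vectors and bandwidth below are arbitrary illustrations):

```python
import numpy as np

def rbf(x, z, gamma=0.1):
    # Gaussian RBF kernel k(x, z) = exp(-gamma * ||x - z||^2).
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([0.3, -1.2, 0.7])
z = np.array([1.0, 0.5, -0.4])

# The full d-dimensional RBF kernel ...
k_full = rbf(x, z)
# ... equals the product of d one-dimensional RBF kernels, one per
# coordinate, because the squared norm splits into a sum over coordinates.
k_prod = np.prod([rbf(x[i:i + 1], z[i:i + 1]) for i in range(3)])

assert np.isclose(k_full, k_prod)
```

This factorization is what makes the RBF kernel a product kernel, and hence a special case of the class the thesis links to spaces of tensors.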

    DynOpt: Incorporating dynamics into mean-variance portfolio optimization

    Mean-variance (MV) portfolio theory leads to relatively simple and elegant numerical problems. Nonetheless, the approach has been criticized for treating the market parameters as if they were constant over time. We propose a novel convex optimization problem that extends an existing MV formulation with chance constraints by accounting for the portfolio dynamics. The core idea is to consider a multiperiod scenario where portfolio weights are implicitly regarded as the output of a state-space dynamical system driven by external inputs. The approach leverages a result from realization theory and uses the nuclear norm to penalize complex dynamical behaviors. The proposed ideas are illustrated by two case studies. © 2013 IEEE.
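For context, the static single-period MV problem that this work extends has a simple closed form. The sketch below solves it for illustrative data; the covariance matrix, expected returns, and risk-aversion value are assumptions, and the paper's multiperiod dynamics and chance constraints are omitted:

```python
import numpy as np

# Static mean-variance building block: maximize mu'w - (gamma/2) w'Sigma w
# subject to the budget constraint 1'w = 1 (solved via its Lagrange
# multiplier). All numbers below are illustrative.
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # assumed return covariance
mu = np.array([0.05, 0.08, 0.12])        # assumed expected returns
gamma = 5.0                               # assumed risk aversion

ones = np.ones(3)
Sinv = np.linalg.inv(Sigma)
w_unc = Sinv @ mu / gamma                 # unconstrained optimum
# Shift along Sigma^-1 * 1 so the weights sum to one.
w = w_unc + Sinv @ ones * (1 - ones @ w_unc) / (ones @ Sinv @ ones)
```

Treating a sequence of such weight vectors as the output of a low-order dynamical system, and penalizing model order with the nuclear norm, is the step the paper adds on top of this static problem.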

    Kernel Methods

    © Springer-Verlag Berlin Heidelberg 2015. This chapter addresses the study of kernel methods, a class of techniques that play a major role in machine learning and nonparametric statistics. Among others, these methods include support vector machines (SVMs) and least squares SVMs, kernel principal component analysis, kernel Fisher discriminant analysis, and Gaussian processes. The use of kernel methods is systematic and properly motivated by statistical principles. In practical applications, kernel methods lead to flexible predictive models that often outperform competing approaches in terms of generalization performance. The core idea consists of mapping data into a high-dimensional space by means of a feature map. Since the feature map is normally chosen to be nonlinear, a linear model in the feature space corresponds to a nonlinear rule in the original domain. This fact suits many real-world data analysis problems, which often require nonlinear models to describe their structure. In Sect. 32.1 we present historical notes and summarize the main ingredients of kernel methods. In Sect. 32.2 we present the core ideas of statistical learning and show how regularization can be employed to devise practical learning algorithms. In Sect. 32.3 we show a selection of techniques that are representative of a large class of kernel methods; these techniques - termed primal-dual methods - use Lagrange duality as the main mathematical tool. Section 32.4 discusses Gaussian processes, a class of kernel methods that uses a Bayesian approach to perform inference and learning. Section 32.5 recalls different approaches for the tuning of parameters. In Sect. 32.6 we review the mathematical properties of different yet equivalent notions of kernels and recall a number of specialized kernels for learning problems involving structured data. We conclude the chapter by presenting applications in Sect. 32.7.
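As a minimal concrete instance of the chapter's core idea, the sketch below fits kernel ridge regression with an RBF kernel: a model that is linear in the feature space but nonlinear in the input. The data and hyperparameters are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Kernel ridge regression: f(x) = sum_i alpha_i k(x_i, x),
# with alpha = (K + lam * I)^-1 y (the regularized dual solution).
rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 30)[:, None]     # 30 training inputs
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(30)  # noisy targets

def rbf(A, B, gamma=1.0):
    # RBF kernel matrix for 1-D inputs stored as column vectors.
    return np.exp(-gamma * (A - B.T) ** 2)

lam = 1e-2
alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(30), y)

# Predict at a new point: linear in feature space, nonlinear in x.
X_test = np.array([[np.pi / 2]])
pred = rbf(X_test, X) @ alpha                  # should track sin(pi/2)
```

The same dual pattern, a kernel matrix plus a regularized linear solve, underlies the primal-dual methods and Gaussian process regression surveyed in the chapter.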